Add error parameter to EndTimeExtractor and AttributesExtractor#onEnd() #3988
Conversation
```java
// In Instrumenter: the error cause is unwrapped before being recorded on the span
if (error != null) {
  error = errorCauseExtractor.extractCause(error);
  span.recordException(error);
}
```

```diff
 UnsafeAttributes attributesBuilder = new UnsafeAttributes();
 for (AttributesExtractor<? super REQUEST, ? super RESPONSE> extractor : attributesExtractors) {
-  extractor.onEnd(attributesBuilder, request, response);
+  extractor.onEnd(attributesBuilder, request, response, error);
 }
```
Wondering a little bit about passing the original error vs passing the extracted error here.
do you think ErrorCauseExtractor might be used/useful for suppressing "expected" exceptions from being recorded? (and maybe still wanting to add a tag that there was an error, just wanting to suppress the big stack trace from getting recorded?)
or maybe there's a better mechanism for this use case
> do you think ErrorCauseExtractor might be used/useful for suppressing "expected" exceptions from being recorded?
I think that'd be a misuse of the API. It was originally meant to strip out those "meaningless" exceptions that wrap over the "real" ones, like ExecutionException.

The stacktrace issue seems to be a real one though, we have #431 in our backlog. Perhaps we could think about an additional configuration knob on the InstrumenterBuilder for that.
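For reference, a minimal sketch of that intended unwrapping use case, assuming only the `extractCause(Throwable)` shape visible in the snippet above (the class name here is made up, not the real API type):

```java
import java.util.concurrent.ExecutionException;

// Sketch only: strips "meaningless" wrapper exceptions such as
// ExecutionException so the underlying cause is recorded instead.
// Assumes the extractCause(Throwable) shape quoted above.
class UnwrappingErrorCauseExtractor {
  Throwable extractCause(Throwable error) {
    // Walk down the cause chain while the top-level exception is just a wrapper.
    while (error instanceof ExecutionException && error.getCause() != null) {
      error = error.getCause();
    }
    return error;
  }
}
```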
ya, that makes sense to me 👍
Hey @mateuszrzeszutek, thank you for resolving the problem in #3730. But I still don't understand the question about the label value of a metric.
In a practical scenario, when monitoring the Lettuce client, commands and parameters are usually concatenated together as span labels. The attributes then must have a key named "sql.statement", but this value is not suitable as a label of a Redis metric, because the cardinality is too large; "mget" should be used instead.
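To illustrate the cardinality point, a hypothetical helper (all names invented for this sketch) that reduces a full command such as "MGET key1 key2 key3" to a low-cardinality label like "mget":

```java
import java.util.Locale;

// Hypothetical helper, for illustration only: derive a low-cardinality
// metric label from a full command statement.
final class RedisLabels {
  static String operationLabel(String statement) {
    int space = statement.indexOf(' ');
    String command = space == -1 ? statement : statement.substring(0, space);
    return command.toLowerCase(Locale.ROOT);
  }
}
```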
@mateuszrzeszutek this one has a merge conflict
(force-pushed from d778788 to 596762b)
Yeah that's true. In case of metrics, the View API should solve that problem, but we don't really have anything like that for spans (well, you can remove attributes in the collector I guess).
I wanted to avoid passing the response & exception everywhere. I think that those […]

That's why almost all (if not all) database clients' instrumentations emit […]
@mateuszrzeszutek It might be given a full URL like "http://10.x.x.x/user/123456?sign=xxxxx&...", but that's not appropriate for metrics, which typically require a tag like "/user/{id}" (perhaps splicing together the service name to build a new tag called "router" - maybe a slightly inappropriate example, but my point is that losing the original request object may also cost the accuracy of the metric tag, and even increase the difficulty of obtaining the tag value).
The goal of instrumentation is to collect data - how the data gets transformed aside from that is signal specific. Traces generally follow a pattern of full data on a sample of traces, so spans just accept the attributes as is. Metrics instead don't sample but aggregate - the metric view configuration configures the aggregation. Just like they aggregate points into a sum, views will also be used to pick the correct attributes - attribute selection is a sort of aggregation, after all. The intention is definitely not to record high-cardinality attributes. But the metrics SDK is currently in its infancy, so we are still working on that piece of machinery. Hopefully that makes sense.
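For concreteness, a hedged sketch of that kind of attribute-selecting view using the Java metrics SDK's View API. The instrument name and attribute keys are assumptions, and the builder methods shown reflect the SDK as it stabilized later, not necessarily the API at the time of this thread:

```java
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;
import java.util.Set;

class MetricsViewExample {
  // Sketch: keep only the low-cardinality "db.operation" attribute on a
  // hypothetical "db.client.duration" instrument, dropping e.g. db.statement.
  static SdkMeterProvider buildMeterProvider() {
    return SdkMeterProvider.builder()
        .registerView(
            InstrumentSelector.builder().setName("db.client.duration").build(),
            View.builder().setAttributeFilter(Set.of("db.operation")).build())
        .build();
  }
}
```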
Mapping error to semantic attributes makes a lot of sense.
Less sure about end time ;) I guess it's symmetry, but I don't expect that to ever be useful.
Yeah, I added it there mostly for symmetry - honestly I have no idea how you can extract a useful end time from an exception.
Is the symmetry "everywhere we pass RESPONSE, we also pass error"?
Yep, exactly that.
(force-pushed from 1c8c9f7 to 757a92a)
I've added a `@Nullable Throwable error` parameter to `EndTimeExtractor` and `AttributesExtractor#onEnd()`. Now, along with the `SpanStatusExtractor`, every method in the Instrumenter API that accepts a `RESPONSE` also accepts a `Throwable`. The operation can end with either a `RESPONSE` or an error being passed to the instrumenter, so I thought it makes sense to always pass both of these to have a complete view of how it ended.

This PR resolves #3730. It is now possible to extract custom attributes in an `AttributesExtractor` and they'll be passed along to the `RequestListener`, thus making it possible to use those attributes in custom metrics.
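To make the new shape concrete, a minimal sketch of a custom extractor against the `onEnd` signature this PR introduces. The attribute key is invented, and the exact type shape is an assumption (it is written here as if `AttributesExtractor` were an interface; in some instrumentation-api versions it is an abstract class with protected methods):

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.AttributesBuilder;
import io.opentelemetry.instrumentation.api.instrumenter.AttributesExtractor;
import javax.annotation.Nullable;

// Sketch only: records whether the operation failed, using the new error
// parameter. The "my.request.failed" key is made up for this example.
class ErrorFlagAttributesExtractor<REQUEST, RESPONSE>
    implements AttributesExtractor<REQUEST, RESPONSE> {

  private static final AttributeKey<Boolean> FAILED =
      AttributeKey.booleanKey("my.request.failed");

  @Override
  public void onStart(AttributesBuilder attributes, REQUEST request) {}

  @Override
  public void onEnd(
      AttributesBuilder attributes,
      REQUEST request,
      @Nullable RESPONSE response,
      @Nullable Throwable error) {
    // Low-cardinality flag that a RequestListener / custom metrics can consume.
    attributes.put(FAILED, error != null);
  }
}
```

Such an extractor would presumably be registered on the `InstrumenterBuilder`, e.g. via `addAttributesExtractor(...)`.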